Tech giants pledge action against deceptive AI in elections
Tech giants including Microsoft, Meta, Google, Amazon, X, OpenAI and TikTok unveiled an agreement on Friday aimed at mitigating the risk that artificial intelligence will disrupt elections in 2024.
The tech industry "accord" takes aim at AI-generated images, video and audio that could deceive voters about candidates, election officials and the voting process. But it stops short of calling for an outright ban on such content.
And while the agreement is a show of unity for platforms with billions of collective users, it largely outlines initiatives that are already underway, such as efforts to detect and label AI-generated content.
Fears over how AI could be used to mislead voters and maliciously misrepresent those running for office are escalating in a year that will see millions of people around the world head to the polls. Apparent AI-generated audio has already been used to impersonate President Biden discouraging Democrats from voting in New Hampshire's January primary and to purportedly show a leading candidate claiming to rig the vote in Slovakia's September election.
"The intentional and undisclosed generation and distribution of Deceptive AI Election content can deceive the public in ways that jeopardize the integrity of electoral processes," the text of the accord says. "We affirm that the protection of electoral integrity and public trust is a shared responsibility and a common good that transcends partisan interests and national borders."
The companies rolled out the agreement at the Munich Security Conference, an annual gathering of heads of state, intelligence and military officials and diplomats dubbed the "Davos of Defense."
The agreement is a voluntary set of principles and commitments from the tech companies. It includes developing technology to watermark, detect and label realistic content that's been created with AI; assessing the models that underlie AI software to identify risks for abuse; and supporting efforts to educate the public about AI. The agreement does not spell out how the commitments will be enforced.
The companies signing the agreement include those that make tools for generating AI content, such as OpenAI, Anthropic and Adobe. It has also been signed by ElevenLabs, whose voice-cloning technology researchers believe was used to create the fake Biden audio. Platforms that distribute content also signed, including Facebook and Instagram owner Meta, TikTok and X, the company formerly known as Twitter.
Nick Clegg, Meta's president of global affairs, said coming together as an industry, on top of work the companies are already doing, was necessary because of the scale of the threat AI poses.
"All our defenses are only as strong against the deceptive use of AI during elections as our collective efforts," he said. "Generative AI content doesn't just stay within the silo of one platform. It moves across the internet at great speed from one platform to the next."
With its focus on transparency, education, and detecting and labeling deceptive AI content rather than removing it, the agreement reflects the tech industry's hesitance to more aggressively police political content.
Critics on the right have mounted a pressure campaign in Congress and the courts against social media platform policies and partnerships with government agencies and academics aimed at tamping down election-related falsehoods. As a result, some tech companies have backed off those efforts. In particular, misinformation, propaganda and hate speech have all surged on X since Elon Musk's takeover, according to researchers.
New wrinkle to an old problem
Just how disruptive a force AI will be this election cycle remains an open and unanswerable question.
Some experts fear the risk is hard to overstate.
"The power afforded by new technologies that can be used by adversaries — it's going to be awful," said Joe Kiniry, chief scientist of the open-source election technology company Free & Fair. "I don't think we can do science-fiction writing right now that's going to approach some of the things we're going to see over the next year."
But election officials and the federal government maintain the effect will be more muted.
The Cybersecurity and Infrastructure Security Agency, the arm of the Department of Homeland Security tasked with election security, said in a recent report that generative AI capabilities "will likely not introduce new risks, but they may amplify existing risks to election infrastructure" like disinformation about voting processes and cybersecurity threats.
AI dominated many conversations at a conference of state secretaries of state earlier this month in Washington, D.C. Election officials were quick to note they've been fighting against misinformation about their processes for years, so in many ways AI's recent advance is just an evolution of something they are already familiar with.
"AI needs to be exposed for the amplifier that it is, not the great mysterious, world-changing, calamity-inducing monstrosity that some people are making it out to be," said Adrian Fontes, the Democratic secretary of state of Arizona. "It is a tool by which bad messages can spread, but it's also a tool by which great efficiencies can be discovered."
One specific worry that came up frequently was how difficult it is to encourage the public to be skeptical of what they see online without that skepticism curdling into a broader distrust of, and disengagement from, all information.
Officials expect, for instance, that candidates will claim more and more that true information is AI-generated, a phenomenon known as the liar's dividend.
"It will become easier to claim anything is fake," Adriana Stephan, an election security analyst with CISA, said during a panel about AI at the conference.
Regulators are eyeing guardrails too
Many of the signatories to the new tech accord have already announced efforts that fall under the areas the agreement covers. Meta, TikTok and Google require users to disclose when they post realistic AI-generated content. TikTok has banned AI fakes of public figures when they're used for political or commercial endorsements. OpenAI doesn't allow its tools to be used for political campaigning, creating chatbots impersonating candidates, or discouraging people from voting.
Last week Meta said it will start labeling images created with leading AI tools in the coming months, using invisible markers the industry is developing. Meta also requires advertisers to disclose the use of AI in ads about elections, politics and social issues, and bars political advertisers from using the company's own generative AI tools to make ads.
Efforts to identify and label AI-generated audio and video are more nascent, even as they have already been used to mislead voters, as in New Hampshire.
But even as tech companies respond to pressure over how their products could be misused, they are also pushing ahead with even more advanced technology. On Thursday, OpenAI launched a tool that generates realistic videos up to a minute long from simple text prompts.
The moves by companies to voluntarily rein in the use of AI come as regulators are grappling with how to set guardrails on the new technology.
European lawmakers are poised in April to approve the Artificial Intelligence Act, a sweeping set of rules billed as the world's first comprehensive AI law.
In the U.S., a range of proposed federal laws regulating the technology, including banning deceptive deepfakes in elections and creating a new agency to oversee AI, haven't gained much traction. States are moving faster, with lawmakers in some 32 states introducing bills to regulate deepfakes in elections since the beginning of this year, according to the progressive advocacy group Public Citizen.
Critics of Silicon Valley say that while AI is amplifying existing threats in elections, the risks presented by technology are broader than the companies' newest tools.
"The leading tech-related threat to this year's elections, however, stems not from the creation of content with AI but from a more familiar source: the distribution of false, hateful, and violent content via social media platforms," researchers at the New York University Stern Center for Business and Human Rights wrote in a report this week criticizing content moderation changes made at Meta, Google and X.